The Human Element in AI

Building ethical foundations for artificial intelligence.
Author

Anweshan Adhikari


Earlier this semester, I had the opportunity to attend one of Dr. Timnit Gebru’s thought-provoking lectures at Middlebury College. During the lecture, I had the privilege of asking Dr. Gebru how tech giants like Apple evade scrutiny for their failure to address and eliminate unfairness within their organizations.

Her response to my question, combined with the compelling arguments she presented during her lecture, motivated me to embark on a journey of exploration into the world of AI, fairness, and the other topics Dr. Gebru raised. With the intent of learning more, I delved into a range of resources, including articles and podcasts, from some of the most prominent voices in the field of AI and fairness, such as Kate Crawford, Karen Hao, and Dr. Gebru herself. In this blog post, I will be sharing some of the findings that emerged from my research and reflections.

The AI industry is experiencing remarkable growth, with advancements and innovations unfolding at a rapid pace. The increasing accessibility of AI technologies has created exciting opportunities and unlocked new possibilities for people in all professions. Yet this accessibility brings with it a multitude of challenges and ethical concerns that we must confront. Speaking with Wired magazine in 2020, Dr. Gebru underscored the speed of this development, observing, “The pace of AI development is so fast that it’s outpacing our ability to understand and control it.”

Dr. Gebru emphasized the need for AI to evolve in a manner that’s fair and inclusive, offering benefits to everyone, not just a select few. Echoing these sentiments, The Guardian published an article in 2018, illuminating how AI’s benefits are currently skewed towards a select group. The piece pointed out how AI is leveraged in the gambling industry to anticipate consumer behavior and customize promotions, an approach designed primarily to boost industry engagement and profits.

The ripple effects of these prejudiced AI models are not confined to the gambling industry. There are alarming instances of such models creeping into sectors like employment screening, where they have shown clear biases against individuals based on race, gender, religion, and other characteristics. A prime example is the 2018 revelation about Amazon’s AI recruitment tool, which displayed a stark bias against women, penalizing resumes that included phrases like “women’s chess club captain”. Amazon is not alone: Google’s image recognition algorithm was found to classify Black people as gorillas. If tech giants like Amazon and Google are not evaluating the fairness of their AI models, we must critically question whether smaller entities are even remotely concerned about this glaring issue. It is crucial to drive home the point that neglecting fairness in AI models is not just ethically wrong, but also socially and economically harmful.
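To make “evaluating fairness” concrete, here is a minimal sketch in Python of one common first check: comparing selection rates across groups against the four-fifths (80%) rule used in US employment law. The group labels, numbers, and threshold here are illustrative assumptions for a hypothetical screening model, not a description of Amazon’s actual system.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of candidates selected within each group.

    `decisions` is a list of (group, selected) pairs; the group
    labels used below are illustrative only.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

def four_fifths_check(rates):
    """Flag disparate impact: a group fails the four-fifths rule if its
    selection rate is below 80% of the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical audit of a screening model's past decisions:
decisions = ([("women", True)] * 20 + [("women", False)] * 80
             + [("men", True)] * 40 + [("men", False)] * 60)
rates = selection_rates(decisions)
print(rates)                     # {'women': 0.2, 'men': 0.4}
print(four_fifths_check(rates))  # {'women': False, 'men': True}
```

A check this simple would have surfaced the disparity in the hypothetical numbers above, which is exactly why the absence of such audits at well-resourced companies is so troubling.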

These examples tie closely to my question to Dr. Gebru about how tech giants evade scrutiny for their failure to address and eliminate unfairness within their organizations. Her answer was that while companies such as Google cultivate the main narrative around AI, painting themselves as responsible and ethical, they also exploit the complexity of AI as a shield against accountability. Moreover, during her lecture, she underscored that these corporations often deflect criticism by asserting that their AI development is still in its early stages.

After listening to Dr. Gebru’s lecture at Middlebury College, it was evident that she considers the long-term implications of AI models. Dr. Gebru spoke of a potential future where access to doctors and medical services may be limited to a small portion of the global population, leaving the rest reliant on AI models. This scenario, while seemingly distant, is mirrored by an insightful article hosted on the postgraduate section of Harvard University’s website. Its author believes that physicians will be relied upon only for higher-level decision making, leaving routine examinations and diagnoses to be executed by advanced AI systems. This raises new concerns, such as the accessibility of such models in tech-illiterate regions and populations of the world, as well as their accuracy. In any case, it is clear that there is a crucial need for moderation and oversight in AI development, especially if future models are to be built on the models being built today.

Now, let’s shift our focus a bit. While we’re talking about biases and fairness in AI models, it’s equally important to think about how they’re developed. Karen Hao, a journalist who has written extensively about the ethical implications of AI, shares how the AI industry profits from catastrophes. She gives the example of economically devastated Venezuela, where companies like Appen pay Venezuelans as little as $2 per hour to label data, a crucial step in training AI models. This reliance on cheap labor is a clear form of exploitation, targeting people in desperate need of income. Nor is it limited to Venezuela: developing countries such as Congo, as Dr. Gebru highlighted in her lecture, have also been victims of exploitation.

By now, we have established that AI models are biased and the AI industry is often exploitative. However, it is also important to assess the problem of bias and representation within the AI industry itself. Reports from the AI Now Institute at New York University reveal that women make up just 15 percent of AI research staff at Facebook and only 10 percent of the AI workforce at Google. Even more concerning, the share of Black workers at tech companies including Google, Facebook, and Microsoft ranges between 2.5 and 4 percent. When models are developed exclusively by individuals of a particular gender, race, or other demographic, their developers may not prioritize checking for biases that affect other groups, which can lead to detrimental outcomes. During her lecture at Middlebury College, Dr. Gebru shared some of her personal encounters with sexism and harassment in the AI industry, such as facing resistance when appealing for the renaming of a conference whose acronym matched that of adult websites, experiences that demonstrate the urgent need for greater inclusion and accountability in the AI industry.

There are obviously a lot of problems in the AI industry that need to be addressed urgently. While most companies blame the technology itself for problems with their AI models, Kate Crawford, a leading scholar of the social and political implications of artificial intelligence, argues that the potential harms of AI can only be minimized through policy, not technology. Crawford believes developers should disclose how their systems work and make their datasets available for public scrutiny. I feel that the increasing sophistication of AI models like ChatGPT, Bard, and DALL-E has made it more important than ever for developers to be transparent about how their systems work.
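One concrete way developers could practice the disclosure Crawford calls for is to publish a structured “model card” alongside each model, a reporting practice Dr. Gebru herself helped pioneer. Below is a minimal sketch of what such a card might contain; the class, field names, and example values are illustrative assumptions, not any company’s actual reporting format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, illustrative disclosure document for a trained model."""
    name: str
    intended_use: str
    training_data: str                 # provenance of the data, and who labeled it
    known_limitations: list = field(default_factory=list)
    fairness_evaluations: list = field(default_factory=list)

# Hypothetical card for a resume-screening model:
card = ModelCard(
    name="resume-screener-v1",
    intended_use="Rank applications for human review, never automated rejection.",
    training_data="Ten years of past hiring decisions; labels mirror historical outcomes.",
    known_limitations=["Historical data may encode past hiring bias against women."],
    fairness_evaluations=["Selection-rate comparison by gender (four-fifths rule)."],
)
print(card)
```

Even a lightweight document like this forces a developer to state, in public, what the training data was and which biases were checked, which is precisely the accountability Crawford argues policy should require.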

In today’s fast-paced tech landscape, it feels like every company around the corner is unveiling a new text-based AI. The race to develop AI models appears to be outpacing the commitment to ethical standards, particularly around the collection of training data. The AI industry could well be on the brink of a crisis, one that’s skillfully masked by corporations deflecting blame and sidestepping accountability. At this critical juncture, it becomes increasingly apparent that voices like Dr. Gebru’s are needed, despite some of her controversial views on topics such as eugenics and Cosmism. Dr. Gebru’s firsthand experience and outspokenness in this field bring a valuable perspective to these debates.

Recent initiatives, like that of the nonprofit Future of Life Institute, highlight this need for caution. The organization issued an open letter urging a temporary six-month pause on giant AI experiments. This call to action was endorsed by high-profile figures like Elon Musk and Steve Wozniak, but was unfortunately unsuccessful. This underscores the pressing need for more introspection within the AI industry, and the crucial role figures like Dr. Gebru play in championing these concerns.

Sources

https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/
https://www.theguardian.com/technology/2018/apr/30/bookies-using-ai-to-keep-gamblers-hooked-insiders-say
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
https://postgraduateeducation.hms.harvard.edu/trends-medicine/how-artificial-intelligence-disrupting-medicine-what-means-physicians
https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/
https://www.dazeddigital.com/science-tech/article/44059/1/artificial-intelligence-is-too-white-and-too-male-says-a-new-study
https://issues.org/episode-29-artificial-intelligence-policy-crawford/
https://futureoflife.org/open-letter/pause-giant-ai-experiments/